Oslo University Hospital, University of Oslo, UiT The Arctic University of Norway andrekle@ifi.uio.no

Neural Information Processing Systems

Domain generalisation in computational histopathology is challenging because the images are substantially affected by differences among hospitals due to factors like fixation and staining of tissue and imaging equipment. We hypothesise that focusing on nuclei can improve the out-of-domain (OOD) generalisation in cancer detection. We propose a simple approach to improve OOD generalisation for cancer detection by focusing on nuclear morphology and organisation, as these are domain-invariant features critical in cancer detection.
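
The abstract does not spell out the mechanism, so the sketch below is only one plausible reading of "focusing on nuclear morphology and organisation": suppress non-nuclear tissue before classification. The crude hematoxylin-channel threshold stands in for a proper nuclei segmentation model and is an assumption, not the authors' method.

```python
import numpy as np

def crude_nuclei_mask(rgb: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    """Very rough stand-in for a nuclei segmenter: nuclei stain dark with
    hematoxylin, so threshold the optical density of the blue channel.
    A real pipeline would use a dedicated nuclei segmentation model."""
    od = -np.log10(np.clip(rgb.astype(np.float64) / 255.0, 1e-6, 1.0))
    return (od[..., 2] > threshold).astype(np.float32)

def keep_only_nuclei(rgb: np.ndarray) -> np.ndarray:
    """Suppress non-nuclear tissue so a downstream classifier sees mainly
    nuclear morphology and spatial organisation, the hypothesised
    domain-invariant signal."""
    mask = crude_nuclei_mask(rgb)
    return (rgb * mask[..., None]).astype(np.uint8)

if __name__ == "__main__":
    patch = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    masked = keep_only_nuclei(patch)
    print(masked.shape, masked.dtype)
```

Classifier training would then proceed on the masked patches; whether the authors use masking, nuclei-derived features, or a segmentation-guided mechanism is not stated in the abstract.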


OSLO: One-Shot Label-Only Membership Inference Attacks

Neural Information Processing Systems

We introduce One-Shot Label-Only (OSLO) membership inference attacks (MIAs), which infer a given sample's membership in a target model's training set with high precision using just a single query, where the target model returns only the predicted hard label. This is in contrast to state-of-the-art label-only attacks, which require roughly 6,000 queries yet achieve lower attack precision than OSLO. The core idea is that a member sample exhibits more resistance to adversarial perturbations than a non-member. We compare OSLO against state-of-the-art label-only attacks and demonstrate that, despite requiring only one query, our method significantly outperforms previous attacks in precision and true positive rate (TPR) at the same false positive rate (FPR). For example, compared to previous label-only MIAs, OSLO achieves a TPR that is at least 7× higher under a 1% FPR and at least 22× higher under a 0.1% FPR on CIFAR100 for a ResNet18 model.


The $200 Android vs. the $1,000 iPhone: How our digital divide keeps growing

ZDNet

On one screen, an urban professional in Oslo taps through ultra-secure banking apps, relies on an AI-powered personal assistant, and streams media seamlessly over high-speed 5G using their iPhone. On the other screen, a farmer in Malawi scrolls through a modest Android phone -- likely costing less than a week's wages -- just to read the news, check tomorrow's weather, and send WhatsApp messages over a patchy mobile connection. These very different experiences highlight the divide between the Global North and the Global South. These terms refer not only to geographic locations but also to the world's wealthiest and most industrialized regions -- such as Europe, North America, and parts of East Asia -- and economically developing nations across much of Africa, Latin America, South Asia, and Oceania. Technology symbolizes innovation, convenience, and seamless connectivity in the Global North.


OSLO: One-Shot Label-Only Membership Inference Attacks

arXiv.org Artificial Intelligence

We introduce One-Shot Label-Only (OSLO) membership inference attacks (MIAs), which infer a given sample's membership in a target model's training set with high precision using just a single query, where the target model returns only the predicted hard label. This is in contrast to state-of-the-art label-only attacks, which require roughly 6,000 queries yet achieve lower attack precision than OSLO. OSLO leverages transfer-based black-box adversarial attacks. The core idea is that a member sample exhibits more resistance to adversarial perturbations than a non-member. We compare OSLO against state-of-the-art label-only attacks and demonstrate that, despite requiring only one query, our method significantly outperforms previous attacks in precision and true positive rate (TPR) at the same false positive rate (FPR). For example, compared to previous label-only MIAs, OSLO achieves a TPR that is 7× to 28× higher under a 0.1% FPR on CIFAR10 for a ResNet model. We also evaluate multiple defense mechanisms against OSLO.
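
The abstract pins down the decision rule only at a high level: craft a transfer-based adversarial example on local surrogate models under a chosen perturbation budget, then spend the single query checking whether the target still returns the true label. The sketch below illustrates that rule under those assumptions; the surrogate attack (`craft_transfer_adversarial`) is a placeholder, not the paper's implementation, and the budget calibration is omitted.

```python
import numpy as np

def craft_transfer_adversarial(x, y_true, surrogates, epsilon):
    """Placeholder for a transfer-based black-box attack (e.g. an iterative
    attack averaged over local surrogate models). Here it is just a random
    sign perturbation of size epsilon, to keep the sketch self-contained."""
    return np.clip(x + epsilon * np.sign(np.random.randn(*x.shape)), 0.0, 1.0)

def one_shot_membership_guess(x, y_true, query_target_label, surrogates, epsilon):
    """One-shot label-only membership guess.

    Intuition from the abstract: training members resist adversarial
    perturbation more than non-members. If the target model still returns the
    true hard label on an adversarial example of budget epsilon, guess
    'member'. Raising epsilon trades recall for precision."""
    x_adv = craft_transfer_adversarial(x, y_true, surrogates, epsilon)
    return query_target_label(x_adv) == y_true   # the single hard-label query

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((3, 32, 32))
    toy_target = lambda x_adv: 1    # stands in for the real black-box model
    print(one_shot_membership_guess(x, y_true=1, query_target_label=toy_target,
                                    surrogates=None, epsilon=8 / 255))
```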


Deep Learning-based Intraoperative MRI Reconstruction

arXiv.org Artificial Intelligence

Purpose: To evaluate the quality of deep learning reconstruction for prospectively accelerated intraoperative magnetic resonance imaging (iMRI) during resective brain tumor surgery. Materials and Methods: Accelerated iMRI was performed during brain surgery using dual surface coils positioned around the area of resection. A deep learning (DL) model was trained on the fastMRI neuro dataset to mimic the data from the iMRI protocol. Evaluation was performed on imaging material from 40 patients imaged between 01.11.2021 and 01.06.2023 who underwent iMRI during tumor resection surgery. A comparative analysis was conducted between the conventional compressed sense (CS) method and the trained DL reconstruction method. Two working neuro-radiologists and a working neurosurgeon performed a blinded evaluation of multiple image quality metrics on a 1 to 5 Likert scale (1 = non-diagnostic, 2 = poor, 3 = acceptable, 4 = good, 5 = excellent) and indicated their favored reconstruction variant. Results: The DL reconstruction was strongly favored or favored over the CS reconstruction in 33/40, 39/40, and 8/40 cases for readers 1, 2, and 3, respectively. Two of the three readers consistently assigned higher ratings to the DL reconstructions, and the DL reconstructions scored higher than their respective CS counterparts in 72%, 72%, and 14% of cases for readers 1, 2, and 3, respectively. Still, the DL reconstructions exhibited shortcomings such as a striping artifact and reduced signal. Conclusion: DL shows promise for high-quality reconstruction of intraoperative MRI, with perceived spatial resolution, signal-to-noise ratio, diagnostic confidence, and diagnostic conspicuity equal to or better than compressed sense.
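
The abstract only says the fastMRI data were adapted to "mimic" the accelerated iMRI protocol; retrospective k-space undersampling is one common way to do that, sketched below purely as an assumption. The acceleration factor, sampling mask, and single-coil handling are illustrative choices, not the study's protocol.

```python
import numpy as np

def undersample_kspace(image: np.ndarray, accel: int = 4, center_frac: float = 0.08,
                       seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Retrospectively undersample a fully sampled 2D image to mimic an
    accelerated acquisition: keep a fully sampled central band of k-space plus
    a random subset of outer phase-encode lines, then zero-fill and inverse-FFT."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / accel)            # random phase-encode lines
    center = int(ny * center_frac / 2)
    mask[ny // 2 - center: ny // 2 + center] = True  # fully sampled k-space center
    masked = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(masked)))
    return zero_filled, masked

if __name__ == "__main__":
    phantom = np.zeros((256, 256))
    phantom[96:160, 96:160] = 1.0
    aliased, _ = undersample_kspace(phantom, accel=4)
    # (aliased, phantom) pairs would form training data for a DL reconstructor.
    print(aliased.shape)
```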


Uncertainty quantification in automated valuation models with locally weighted conformal prediction

arXiv.org Machine Learning

Non-parametric machine learning models, such as random forests and gradient boosted trees, are frequently used to estimate house prices due to their predictive accuracy, but such methods are often limited in their ability to quantify prediction uncertainty. Conformal Prediction (CP) is a model-agnostic framework for constructing confidence sets around machine learning prediction models with minimal assumptions. However, due to the spatial dependencies observed in house prices, direct application of CP leads to confidence sets that are not calibrated everywhere, i.e., too large in certain geographical regions and too small in others. We survey various approaches to adjust the CP confidence set to account for this and demonstrate their performance on a data set from the housing market in Oslo, Norway. Our findings indicate that calibrating the confidence sets on a locally weighted version of the non-conformity scores makes the coverage more consistently calibrated across geographical regions. We also perform a simulation study on synthetically generated sale prices to empirically explore the performance of CP on housing market data under idealized conditions with known data-generating mechanisms.
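
A minimal sketch of the locally weighted calibration idea the abstract describes: weight the split-conformal calibration scores by spatial proximity to the test property before taking the quantile. The Gaussian kernel, the bandwidth, and the omission of the usual finite-sample correction are simplifying assumptions, not the paper's exact scheme.

```python
import numpy as np

def gaussian_weights(cal_xy: np.ndarray, test_xy: np.ndarray, bandwidth: float) -> np.ndarray:
    """Kernel weights based on geographic distance to the test location."""
    d = np.linalg.norm(cal_xy - test_xy, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def weighted_quantile(scores: np.ndarray, weights: np.ndarray, q: float) -> float:
    """Quantile of the weighted empirical distribution of calibration scores."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    idx = min(np.searchsorted(cdf, q, side="left"), len(s) - 1)
    return float(s[idx])

def local_conformal_interval(y_hat: float, test_xy: np.ndarray,
                             cal_pred: np.ndarray, cal_y: np.ndarray,
                             cal_xy: np.ndarray, alpha: float = 0.1,
                             bandwidth: float = 2.0) -> tuple[float, float]:
    """Split-conformal interval whose width adapts to the local (spatial)
    distribution of residuals around the test property's location."""
    scores = np.abs(cal_y - cal_pred)                 # absolute-residual scores
    w = gaussian_weights(cal_xy, test_xy, bandwidth)
    q = weighted_quantile(scores, w, 1.0 - alpha)
    return y_hat - q, y_hat + q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    cal_xy = rng.uniform(0, 10, size=(n, 2))          # e.g. projected coordinates
    cal_y = rng.normal(5.0, 1.0, size=n)
    cal_pred = cal_y + rng.normal(0.0, 0.3 + 0.1 * cal_xy[:, 0], size=n)
    print(local_conformal_interval(5.2, np.array([8.0, 3.0]), cal_pred, cal_y, cal_xy))
```

With spatially varying residual noise, intervals for test locations in noisier regions come out wider, which is the calibration behaviour the paper is after.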


Retrieval-Generation Synergy Augmented Large Language Models

arXiv.org Artificial Intelligence

Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks. Existing methods for obtaining effective documents fall mainly into two categories: retrieving them from an external knowledge base, or using large language models themselves to generate documents. We propose an iterative retrieval-generation collaborative framework. It not only leverages both parametric and non-parametric knowledge, but also helps find the correct reasoning path through retrieval-generation interactions, which is important for tasks that require multi-step reasoning. We conduct experiments on four question answering datasets, including single-hop QA and multi-hop QA tasks. Empirical results show that our method significantly improves the reasoning ability of large language models and outperforms previous baselines.
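
The described control flow alternates retrieval and generation so that each draft answer steers the next retrieval round. A minimal sketch of that loop follows; `retrieve` and `generate` are placeholders for whichever retriever and LLM are plugged in, and the fixed iteration count and prompt format are assumptions rather than the paper's design.

```python
from typing import Callable, List

def iterative_retrieval_generation(question: str,
                                   retrieve: Callable[[str, int], List[str]],
                                   generate: Callable[[str, List[str]], str],
                                   iterations: int = 3,
                                   top_k: int = 5) -> str:
    """Alternate retrieval and generation: the current draft answer is appended
    to the query so the next retrieval round can fetch documents relevant to
    the intermediate reasoning (useful for multi-hop questions)."""
    draft = ""
    for _ in range(iterations):
        query = question if not draft else f"{question}\n{draft}"
        docs = retrieve(query, top_k)       # non-parametric evidence
        draft = generate(question, docs)    # parametric model conditioned on docs
    return draft

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs; real use would wire in a retriever and an LLM.
    corpus = ["Oslo is the capital of Norway.", "Norway borders Sweden."]
    retrieve = lambda q, k: [d for d in corpus
                             if any(w.lower() in d.lower() for w in q.split())][:k]
    generate = lambda q, docs: " ".join(docs) or "unknown"
    print(iterative_retrieval_generation("What is the capital of Norway?", retrieve, generate))
```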


A 3D explainability framework to uncover learning patterns and crucial sub-regions in variable sulci recognition

arXiv.org Artificial Intelligence

Precisely identifying sulcal features in brain MRI is made challenging by the variability of brain folding. This research introduces an innovative 3D explainability framework that validates outputs from deep learning networks in their ability to detect the paracingulate sulcus, an anatomical feature that may or may not be present on the frontal medial surface of the human brain. The study trained and tested two networks, amalgamating the local explainability techniques Grad-CAM and SHAP with a dimensionality reduction method. The explainability framework provided both localized and global explanations, along with classification accuracy results, revealing pertinent sub-regions contributing to the decision process through a post-fusion transformation of explanatory and statistical features. Leveraging the TOP-OSLO dataset of MRI acquired from patients with schizophrenia, greater accuracies of paracingulate sulcus detection (presence or absence) were found in the left compared to the right hemisphere, with distinct but extensive sub-regions contributing to each classification outcome. The study also inadvertently highlighted the critical role of an unbiased annotation protocol in maintaining network performance fairness. Our proposed method not only offers automated, impartial annotations of a variable sulcus but also provides insights into the broader anatomical variations associated with its presence throughout the brain. The adoption of this methodology holds promise for instigating further explorations and inquiries in the field of neuroscience. While the folding of the primary sulci of the human brain, formed during gestation, is broadly stable across individuals, the secondary sulci, which continue to develop post-natally, are unique to each individual. This inter-individual variability poses a significant challenge for the detection and accurate annotation of sulcal features from MRI of the brain. Undertaking this task manually is time-consuming, with outcomes that depend on the rater, which prevents the efficient leveraging of the large, open-access MRI databases that are available. While primary sulci can be detected very accurately with automated methods, secondary sulci pose a more difficult computational problem due to their higher variability in shape and, indeed, presence or absence [3]. A successful automated method would facilitate investigations of brain folding variation, representative of events occurring during a critical developmental period. Furthermore, generalized and unbiased annotations would make tractable large-scale studies of cognitive and behavioral development, and of the emergence of mental and neurological disorders, with high levels of statistical power. The folding of the brain has been linked to brain function, and some specific folding patterns have been related to susceptibility to neurological adversities [20].
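
The framework amalgamates Grad-CAM and SHAP with a dimensionality reduction step; the sketch below covers only the 3D Grad-CAM ingredient on a toy stand-in network, since the study's trained models and the TOP-OSLO data are not reproduced here. The architecture and choice of target layer are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DNet(nn.Module):
    """Stand-in 3D classifier (paracingulate sulcus present / absent)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)              # (B, 16, D/2, H/2, W/2)
        pooled = fmap.mean(dim=(2, 3, 4))
        return self.head(pooled), fmap

def grad_cam_3d(model: nn.Module, volume: torch.Tensor, target_class: int) -> torch.Tensor:
    """Grad-CAM over the last conv feature map of a 3D network: weight each
    channel by the spatially averaged gradient of the target logit, then ReLU."""
    logits, fmap = model(volume)
    fmap.retain_grad()
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3, 4), keepdim=True)   # channel importances
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    # Upsample back to input resolution so the map can be overlaid on the MRI.
    return F.interpolate(cam, size=volume.shape[2:], mode="trilinear", align_corners=False)

if __name__ == "__main__":
    model = Tiny3DNet()
    vol = torch.randn(1, 1, 32, 32, 32)      # toy T1-weighted patch
    cam = grad_cam_3d(model, vol, target_class=1)
    print(cam.shape)                         # torch.Size([1, 1, 32, 32, 32])
```

The paper's post-fusion transformation of explanatory and statistical features would consume maps like this (and the SHAP counterparts) before the dimensionality reduction step.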


Open-Set Likelihood Maximization for Few-Shot Learning

arXiv.org Artificial Intelligence

We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e., classifying instances among a set of classes for which we only have a few labeled samples, while simultaneously detecting instances that do not belong to any known class. We explore the popular transductive setting, which leverages the unlabelled query instances at inference. Motivated by the observation that existing transductive methods perform poorly in open-set scenarios, we propose a generalization of the maximum likelihood principle, in which latent scores down-weighting the influence of potential outliers are introduced alongside the usual parametric model. Our formulation embeds supervision constraints from the support set and additional penalties discouraging overconfident predictions on the query set. We proceed with a block-coordinate descent, with the latent scores and parametric model co-optimized alternately, thereby benefiting from each other. We call the resulting formulation Open-Set Likelihood Optimization (OSLO). OSLO is interpretable and fully modular; it can be applied on top of any pre-trained model seamlessly. Through extensive experiments, we show that our method surpasses existing inductive and transductive methods on both aspects of open-set recognition, namely inlier classification and outlier detection.
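
As a rough illustration of the alternating scheme described (latent inlier scores co-optimized with the parametric model), here is a simplified block-coordinate sketch over class prototypes. The inlierness heuristic, the distance-based posteriors, and the absence of the paper's penalty terms are all simplifications, not the actual OSLO objective.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def open_set_alternating(support_x, support_y, query_x, n_classes,
                         n_iters: int = 10, temperature: float = 1.0):
    """Simplified alternating scheme in the spirit of the described approach:
    co-update (a) latent inlier scores for each query and (b) class prototypes,
    so likely outliers contribute less to the class model."""
    # Initialise prototypes from the labelled support set.
    protos = np.stack([support_x[support_y == k].mean(axis=0) for k in range(n_classes)])
    for _ in range(n_iters):
        # Step 1: class posteriors and inlierness from distances to prototypes.
        d2 = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        post = softmax(-d2 / temperature)                 # soft class assignments
        min_d2 = d2.min(axis=1)
        inlier = 1.0 / (1.0 + min_d2 / (np.median(min_d2) + 1e-12))  # heuristic score
        # Step 2: prototypes from support plus inlier-weighted query mass.
        for k in range(n_classes):
            sup = support_x[support_y == k]
            w = post[:, k] * inlier
            protos[k] = (sup.sum(0) + (w[:, None] * query_x).sum(0)) / (len(sup) + w.sum())
    return post.argmax(axis=1), inlier                    # predicted class, inlier score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = np.array([[0.0, 0.0], [5.0, 5.0]])
    support_x = np.vstack([centers[k] + 0.3 * rng.standard_normal((5, 2)) for k in range(2)])
    support_y = np.repeat([0, 1], 5)
    inliers = centers[rng.integers(0, 2, 20)] + 0.3 * rng.standard_normal((20, 2))
    outliers = rng.uniform(-10, 15, size=(5, 2))
    preds, scores = open_set_alternating(support_x, support_y,
                                         np.vstack([inliers, outliers]), n_classes=2)
    print(preds, np.round(scores, 2))
```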